Loss functions for classification

In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems. Given X as the vector space of all possible inputs and Y = \{-1, 1\} as the set of all possible outputs, we wish to find a function f: X \mapsto \mathbb{R} which best maps \vec{x} to y. However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same \vec{x} to generate different y. As a result, the goal of the learning problem is to minimize expected risk, defined as
:I[f] = \displaystyle \int_{X \times Y} V(f(\vec{x}),y) \, p(\vec{x},y) \, d\vec{x} \, dy
where V(f(\vec{x}),y) represents the loss function, and p(\vec{x},y) represents the probability distribution of the data, which can equivalently be written using Bayes' theorem as
:p(\vec{x},y) = p(y\mid\vec{x}) \, p(\vec{x}).
In practice, the probability distribution p(\vec{x},y) is unknown. Consequently, utilizing a training set of n independently and identically distributed samples
:S = \{(\vec{x}_1,y_1),\dots,(\vec{x}_n,y_n)\}
drawn from the data sample space, one seeks to minimize the empirical risk
:I_S[f] = \frac{1}{n} \sum_{i=1}^n V(f(\vec{x}_i),y_i)
as a proxy for the expected risk. (See statistical learning theory for a more detailed description.)
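A minimal Python sketch of this proxy (the helper name empirical_risk, the toy data, and the linear scorer below are illustrative assumptions, not taken from the source):

 import numpy as np
 
 def empirical_risk(f, X, y, loss):
     """Average loss of f over the n training samples, i.e. the proxy I_S[f]."""
     return np.mean([loss(f(x_i), y_i) for x_i, y_i in zip(X, y)])
 
 # Toy data with labels in {-1, +1} and an illustrative linear scorer f(x) = w . x.
 X = np.array([[1.0, 0.5], [-0.3, 1.2], [0.8, -0.7]])
 y = np.array([1, -1, 1])
 w = np.array([0.4, -0.1])
 f = lambda x_i: float(np.dot(w, x_i))
 
 # 0-1 loss (introduced below): 1 whenever the sign of f(x) disagrees with y.
 zero_one = lambda fx, yi: float(yi * fx < 0)
 print(empirical_risk(f, X, y, zero_one))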
For computational ease, it is standard practice to write loss functions as functions of only one variable. Within classification, loss functions are generally written solely in terms of the product of the true label y and the predicted value f(\vec{x}). Selection of a loss function within this framework,
:V(f(\vec{x}),y) = \phi(-yf(\vec{x})),
impacts the optimal f^{*}_S which minimizes empirical risk, as well as the computational complexity of the learning algorithm.
Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for false positives and false negatives) would be the 0–1 indicator function, which takes the value 0 if the predicted classification equals the true class and the value 1 if it does not. This selection is modeled by
:V(f(\vec{x}),y) = H(-yf(\vec{x}))
where H indicates the Heaviside step function.
However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem. As a result, it is better to substitute continuous, convex loss function surrogates which are tractable for commonly used learning algorithms. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow recovery of the actual solution to the original classification problem. Some of these surrogates are described below.
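A short Python sketch of this margin-based convention (hinge and logistic losses are used here as two common convex surrogates; the excerpt above does not name specific surrogates, so these choices are illustrative):

 import numpy as np
 
 # Losses written as phi(v) with v = -y f(x), matching V(f(x), y) = phi(-y f(x)).
 def zero_one(v):                     # phi(v) = H(v): 1 on misclassification
     return (v > 0).astype(float)
 
 def hinge(v):                        # phi(v) = max(0, 1 + v), convex
     return np.maximum(0.0, 1.0 + v)
 
 def logistic(v):                     # phi(v) = log2(1 + e^v), convex and smooth
     return np.log1p(np.exp(v)) / np.log(2)
 
 # Both surrogates are convex in v and upper-bound the 0-1 loss,
 # which is what makes minimizing them a meaningful proxy.
 v = np.linspace(-3.0, 3.0, 13)
 assert np.all(hinge(v) >= zero_one(v))
 assert np.all(logistic(v) >= zero_one(v))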
==Bounds for classification==
Utilizing Bayes' theorem, it can be shown that the optimal f^* for a binary classification problem is equivalent to
:f^*(\vec{x}) \;=\; \begin{cases} 1 & \text{if } p(1\mid\vec{x}) > p(-1\mid\vec{x}) \\ -1 & \text{if } p(1\mid\vec{x}) < p(-1\mid\vec{x}) \end{cases}
(when p(1\mid\vec{x}) \ne p(-1\mid\vec{x})).
Furthermore, it can be shown that for any convex loss function V(yf_0(\vec{x})), where f_0 is the function that minimizes this loss, if f_0(\vec{x}) \ne 0 and V is decreasing in a neighborhood of 0, then
:f^*(\vec{x}) = \operatorname{sgn}(f_0(\vec{x})),
where \operatorname{sgn} is the sign function. Note also that f_0(\vec{x}) \ne 0 in practice when the loss function is differentiable at the origin.
This fact confers a consistency property upon all convex loss functions; specifically, all convex loss functions lead to results consistent with the 0–1 loss function given infinite data. Consequently, the difference between the expected risk obtained with any of these convex loss functions and the expected risk of the 0–1 loss function can be bounded.
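As a concrete illustration of the sign-recovery property (worked here for the logistic loss, which is not written out in this excerpt), consider V(f(\vec{x}),y) = \ln(1+e^{-yf(\vec{x})}). Minimizing the conditional expected loss
:p(1\mid\vec{x}) \ln(1+e^{-f(\vec{x})}) + p(-1\mid\vec{x}) \ln(1+e^{f(\vec{x})})
pointwise in f(\vec{x}) gives
:f_0(\vec{x}) = \ln \frac{p(1\mid\vec{x})}{p(-1\mid\vec{x})},
so \operatorname{sgn}(f_0(\vec{x})) = 1 exactly when p(1\mid\vec{x}) > p(-1\mid\vec{x}), recovering the optimal f^* above.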
